10 research outputs found

    On the Convergence of WKB Approximations of the Damped Mathieu Equation

    Full text link
    Consider the differential equation $m\ddot{x} + \gamma\dot{x} - x\epsilon\cos(\omega t) = 0$, $0 \leq t \leq T$. The form of the fundamental set of solutions is determined by Floquet theory. In the limit $m \to 0$ we can apply WKB theory to obtain first-order approximations of this fundamental set. WKB theory states that the approximation improves as $m \to 0$, in the sense that the difference in sup norm is bounded as a function of $m$ for a given $T$. However, convergence of the periodic and exponential parts is not addressed there. We show that these components do converge: the asymptotic error is $O(m^2)$ for the characteristic exponents and $O(m)$ for the periodic parts. Comment: 10 pages.
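    The WKB construction referenced in the abstract can be sketched as follows (an outline under standard assumptions, not the paper's exact derivation): substituting the ansatz $x(t) = \exp(S_0(t)/m + S_1(t) + \cdots)$ into the equation and collecting powers of $m$ gives

```latex
% Order 1/m: two branches for the leading phase
\dot{S}_0\,(\dot{S}_0 + \gamma) = 0
  \quad\Longrightarrow\quad
  \dot{S}_0 = 0 \ \text{(slow branch)}
  \quad\text{or}\quad
  \dot{S}_0 = -\gamma \ \text{(fast branch)}.
% Order 1, on the slow branch:
\gamma \dot{S}_1 = \epsilon\cos(\omega t)
  \quad\Longrightarrow\quad
  S_1(t) = \frac{\epsilon}{\gamma\omega}\,\sin(\omega t).
```

    To this order the slow solution is purely periodic, while the fast branch decays like $e^{-\gamma t/m}$, which is consistent with the periodic and exponential parts whose convergence the paper analyzes.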

    Overdamped dynamics of a Brownian particle levitated in a Paul trap

    Full text link
    We study the dynamics of the center of mass of a Brownian particle levitated in a Paul trap. We focus on the overdamped regime in the context of levitodynamics, comparing theory with our numerical simulations and experimental data from a nanoparticle in a Paul trap. We provide an exact analytical solution to the stochastic equation of motion, expressions for the standard deviation of the motion, and thermalization times by using the WKB method under two different limits. Finally, we prove that the power spectral density of the motion can be approximated by that of an Ornstein-Uhlenbeck process and use the resulting expression to calibrate the motion of a trapped particle.
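    The approximation invoked at the end of the abstract can be illustrated with the textbook Ornstein-Uhlenbeck spectrum: for $dx = -\theta x\,dt + \sigma\,dW$, the two-sided power spectral density is the Lorentzian $S(\omega) = \sigma^2/(\theta^2 + \omega^2)$. The sketch below (generic OU parameters `theta` and `sigma` are illustrative assumptions, not the paper's trap parameters) checks that integrating this PSD recovers the stationary variance $\sigma^2/(2\theta)$:

```python
import numpy as np

def ou_psd(omega, theta, sigma):
    """Two-sided power spectral density of an Ornstein-Uhlenbeck process
    dx = -theta*x dt + sigma dW: a Lorentzian centered at zero frequency."""
    return sigma**2 / (theta**2 + omega**2)

# Sanity check: integrating S(omega) over all frequencies, divided by 2*pi,
# should recover the stationary variance sigma^2 / (2*theta).
theta, sigma = 2.0, 1.0
omega = np.linspace(-1e4, 1e4, 2_000_001)
domega = omega[1] - omega[0]
variance = np.sum(ou_psd(omega, theta, sigma)) * domega / (2 * np.pi)
print(variance)  # close to sigma**2 / (2*theta) = 0.25
```

    In practice, calibration amounts to fitting such a Lorentzian to the measured spectrum and reading off the corner frequency and noise amplitude.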

    Graph-based methods coupled with specific distributional distances for adversarial attack detection

    Full text link
    Artificial neural networks are prone to being fooled by carefully perturbed inputs that cause egregious misclassifications. These \textit{adversarial} attacks have been the focus of extensive research, as have ways to detect and defend against them. We introduce a novel approach to detecting and interpreting adversarial attacks from a graph perspective. For an image, benign or adversarial, we study how a neural network's architecture can induce an associated graph. We study this graph and introduce specific measures used to predict and interpret adversarial attacks. We show that graph-based approaches help to investigate the inner workings of adversarial attacks.

    Exploring continual learning strategies in artificial neural networks through graph-based analysis of connectivity: insights from a brain-inspired perspective

    No full text
    Artificial Neural Networks (ANNs) aim to mimic information processing in biological networks. In cognitive neuroscience, graph modeling is a powerful framework widely used to study brain structural and functional connectivity. Yet the extension of graph modeling to ANNs has been poorly explored, especially in terms of functional connectivity (i.e., context-dependent changes in unit activity across the network). With a view to designing more robust and interpretable ANNs, we study how a brain-inspired graph-based approach can be extended and used to investigate their properties and behaviors. We focus our study on different continual learning strategies inspired by the human brain and modeled with ANNs. We show that graph modeling offers a simple and elegant framework to deeply investigate ANNs, compare their performance, and explore deleterious behaviors such as catastrophic forgetting.
